Anthropic has announced an expansion of its bug bounty program to test a next-generation AI safety mitigation system, with a primary focus on identifying and defending against "universal jailbreak" attacks. Special attention is given to high-risk domains, including CBRN (chemical, biological, radiological, and nuclear) defense and cybersecurity. Participants will receive early access to the new safety system before its public release and will attempt to find vulnerabilities or bypass its safeguards, with rewards of up to $15,000 for novel universal jailbreaks. The initiative aims to strengthen the safety of AI systems by enlisting security researchers to discover and fix potential weaknesses collaboratively, and to help set a safety benchmark for the broader AI industry.